18 research outputs found

    Sampling-aware Polar Descriptors on the Sphere

    We present a new descriptor and feature matching solution for omnidirectional images. The descriptor builds on log-polar planar descriptors but adapts to the specific geometry and non-uniform sampling density of spherical images. We further propose a rotation-invariant matching method for the proposed descriptor that is particularly interesting for mobile devices: it reduces the computational complexity of the detection phase by eliminating the orientation assignment and shifting it to the feature matching step. We then use a criterion based on the Kullback-Leibler divergence to improve the feature matching performance. Experimental results with spherical images show that the new descriptors offer promising performance and improve on SIFT descriptors computed on the sphere or on tangent planes.
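    A KL-divergence matching criterion of the kind described above can be sketched as follows. This is a minimal illustration assuming descriptors are non-negative histograms; the function names and the normalization details are our own assumptions, not the authors' implementation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    # Treat descriptors as discrete distributions; eps avoids log(0)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def match_descriptor(query, candidates):
    # Return the index of the candidate with minimal KL divergence to the query
    scores = [kl_divergence(query, c) for c in candidates]
    return int(np.argmin(scores))
```

Because the divergence is computed between normalized histograms, the criterion compares the shape of the descriptor distributions rather than their raw magnitudes.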

    Joint registration and super-resolution with omnidirectional images

    This paper addresses the reconstruction of high resolution omnidirectional images from multiple low resolution images with inexact registration. When omnidirectional images from low resolution vision sensors can be uniquely mapped on the 2-sphere, such a reconstruction can be described as a transform domain super-resolution problem in the spherical imaging framework. We describe how several spherical images with arbitrary rotations in the SO(3) rotation group contribute to the reconstruction of a high resolution image with the help of the Spherical Fourier Transform (SFT). As low resolution images might not be perfectly registered in practice, the impact of inaccurate alignment on the transform coefficients is further analyzed. We then cast the joint registration and super-resolution problem as a total least squares norm minimization problem in the SFT domain. An l1-regularized total least squares problem is also considered and solved efficiently by interior point methods. Experiments with synthetic and natural images show that the proposed scheme leads to effective reconstruction of high resolution images even when large registration errors exist in the low resolution images. The quality of the reconstructed images also increases rapidly with the number of low resolution images, which demonstrates the benefits of the proposed solution in super-resolution schemes. Finally, we highlight the benefit of the additional regularization constraint, which clearly leads to reduced noise and improved reconstruction quality.
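    The effect of l1 regularization on a least squares reconstruction can be illustrated with a generic proximal-gradient (ISTA) solver on a small linear system. This is only a sketch of the regularization idea; it does not reproduce the paper's total least squares formulation, the SFT operators, or the interior point solver, and all names below are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by proximal gradient steps
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The soft-thresholding step is what suppresses small noisy coefficients, which is consistent with the reduced-noise behaviour the abstract reports for the regularized problem.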

    Intermediate view generation for perceived depth adjustment of stereo video

    There is significant industry activity on the delivery of 3D video to the home. It is expected that 3D-capable devices will provide consumers with the ability to adjust the perceived depth of stereo content. This paper provides an overview of related techniques and evaluates the effectiveness of several approaches. Practical considerations are also discussed.

    Analysis of Multiview Omnidirectional Images in a Spherical Framework

    With the increasing demand for more immersive applications such as Google Street View or 3D movies, the efficient analysis of visual data from cameras has gained importance. This visual information makes it possible to extract crucial information about scenes, such as similarity for recognition, 3D scene structure, and the textures and patterns of objects. Multi-camera systems provide detailed visual information about a scene with images from different positions and viewing angles. The conventional perspective cameras commonly used in these systems, however, have a limited field of view. They therefore require either the deployment of many cameras or the capture of many images from different points to extract sufficient detail about a scene, which increases the amount of data to be processed and the maintenance costs of such systems. Omnidirectional vision systems overcome this problem thanks to their 360-degree field of view and have found wide application in robotics and surveillance. These systems often use special optical elements such as fisheye lenses or mirror-lens systems. The resulting images, however, inherit the specific geometry of these units, so analyzing them with methods designed for perspective cameras results in degraded performance. In this thesis, we focus on the analysis of multi-view omnidirectional images for efficient scene information extraction. We propose a novel spherical framework for omnidirectional image processing that exploits the property that most omnidirectional images can be uniquely mapped on the sphere. We propose solutions for three common multiview image processing problems, namely feature detection, dense depth estimation and super-resolution for omnidirectional images. We first address the feature extraction problem in omnidirectional images.
We develop a scale-invariant feature detection method that carefully handles the geometry of the images by performing the scale-space analysis directly on their native manifolds, such as the paraboloid or the sphere. We then propose a new descriptor and a matching criterion that take the geometry into account and eliminate the need for orientation computation. We also demonstrate that the proposed method can be used to match features in images captured by different types of sensors, such as perspective, omnidirectional or spherical cameras. We then propose a dense depth estimation method to extract 3D scene information from multiple omnidirectional images. We adapt a graph-cut method to the geometry of those images in order to minimize an energy function formulated for the dense depth estimation problem. We also propose a parallel graph-cut method that gives a significant speed improvement without a significant penalty in accuracy. We show that the proposed method can be applied to multi-camera depth estimation and depth-based arbitrary view synthesis. Finally, we consider multi-view omnidirectional images related by a pure rotation. We address the view synthesis problem for these images in the framework of super-resolution. Taking into account inaccuracies in the rotation parameters, we solve an optimization problem that jointly estimates the rotation errors and reconstructs a high resolution omnidirectional image. We then extend the minimization problem with a regularization term for improved reconstruction quality with a reduced number of images. Results with both synthetic and real omnidirectional images suggest that the proposed method is a viable solution for super-resolution with omnidirectional images. Overall, this dissertation addresses three important issues of multiview omnidirectional image analysis and processing in a novel spherical framework.
Our feature detection method can be used for the calibration of omnidirectional cameras as well as for feature matching in mobile and hybrid camera networks. Furthermore, our dense depth estimation method can improve the quality of 3D scene reconstruction and provide efficient solutions for view synthesis and multiview omnidirectional image coding. Finally, our super-resolution algorithm for omnidirectional images can promote the development of efficient acquisition systems for high resolution omnidirectional images.

    Dense disparity estimation from omnidirectional images

    This paper addresses the problem of dense estimation of disparities between omnidirectional images in a spherical framework. Omnidirectional imaging offers important advantages for the representation and processing of the plenoptic function in 3D scenes, for applications in localization or depth estimation for example. In this context, we propose to perform disparity estimation directly in a spherical framework, in order to avoid discrepancies due to inexact projections of omnidirectional images onto planes. We first perform rectification of the omnidirectional images in the spherical domain. Then we develop a global energy minimization algorithm based on graph cuts in order to perform disparity estimation on the sphere. Experimental results show that the proposed algorithm outperforms typical methods such as those based on block matching, for both a simple synthetic scene and complex natural scenes. The proposed method shows promising performance for dense disparity estimation and can be extended efficiently to networks of several camera sensors.
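    The kind of energy that such a graph-cut method minimizes can be sketched as a photometric data term plus a smoothness prior over a disparity labeling. The sketch below works on a simple rectified grid with periodic longitude (a stand-in for the spherical domain); the specific cost choices (absolute difference, truncated-linear smoothness) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def labeling_energy(left, right, disparity, lam=1.0):
    # Energy of a disparity labeling: photometric data term plus a
    # truncated-linear smoothness term between horizontal neighbours.
    h, w = left.shape
    data = 0.0
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xs = (x - d) % w        # wrap-around: longitude is periodic
            data += abs(float(left[y, x]) - float(right[y, xs]))
    smooth = np.sum(np.minimum(np.abs(np.diff(disparity, axis=1)), 2))
    return data + lam * float(smooth)
```

A graph-cut solver searches over labelings to minimize such an energy; a correct disparity map yields a lower energy than an incorrect one, which is what drives the optimization.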

    Scale Invariant Features and Polar Descriptors in Omnidirectional Imaging

    We propose a method to compute scale invariant features in omnidirectional images. We present a formulation based on Riemannian geometry for the definition of differential operators on the non-Euclidean manifolds that describe the mirror and lens structure in omnidirectional imaging. These operators lead to a scale-space analysis that preserves the geometry of the visual information in omnidirectional images. We then build a novel scale-invariant feature detection framework for any omnidirectional image that can be mapped on the sphere. We also present a new descriptor and feature matching solution for omnidirectional images. The descriptor builds on log-polar planar descriptors and adapts the descriptor computation to the specific geometry and non-uniform sampling density of spherical and omnidirectional images. We further propose a rotation-invariant matching method that eliminates the orientation computation during the feature detection phase and thus decreases the computational complexity. Finally, we show that the proposed framework also makes it possible to match features in images with different geometries. Experimental results demonstrate that the new feature detection method combined with the proposed descriptors offers promising performance and improves on common SIFT features computed on planar omnidirectional images as well as other state-of-the-art methods for omnidirectional images.
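    One concrete consequence of the non-uniform sampling is that an isotropic smoothing kernel on the sphere corresponds to a latitude-dependent kernel width on an equirectangular grid, since the longitude sampling density grows towards the poles. The helper below is a hypothetical illustration of that relation for per-row Gaussian widths, not code from the paper:

```python
import numpy as np

def latitude_adaptive_sigma(sigma, n_rows):
    # On an equirectangular grid, a row at colatitude theta is stretched by
    # a factor 1/sin(theta) in longitude, so an isotropic spherical Gaussian
    # of width sigma maps to a horizontal width sigma / sin(theta) per row.
    theta = (np.arange(n_rows) + 0.5) * np.pi / n_rows   # colatitude per row
    return sigma / np.maximum(np.sin(theta), 1e-3)       # clamp near the poles
```

Ignoring this correction and applying a fixed planar kernel is precisely what degrades planar SIFT near the poles of an omnidirectional image.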

    Super-resolution from unregistered omnidirectional images

    This paper addresses the problem of super-resolution from low resolution spherical images that are not perfectly registered. Such a problem is typically encountered in omnidirectional vision scenarios with reduced resolution sensors in imperfect settings. Several spherical images with arbitrary rotations in the SO(3) rotation group are used for the reconstruction of higher resolution images. We first describe the impact of the registration error on the Spherical Fourier Transform coefficients. Then, we formulate the joint registration and reconstruction problem as a least squares norm minimization problem in the transform domain. Experimental results show that the proposed scheme leads to effective approximations of the high resolution images, even with large registration errors. The quality of the reconstructed images also increases rapidly with the number of low resolution images, which demonstrates the benefits of the proposed solution in super-resolution schemes.

    Disparity Search Range Estimation: Enforcing Temporal Consistency

    This paper presents a new approach for estimating the disparity search range in stereo video that enforces temporal consistency. Reliable search range estimation is very important, since an incorrect estimate causes most stereo matching methods to get trapped in local minima or to produce unstable results over time. In this work, the search range is estimated from a disparity histogram that is generated with sparse feature matching algorithms such as SURF. To achieve more stable results over time, we further propose to enforce temporal consistency by calculating a weighted sum of temporally-neighboring histograms, where the weights are determined by the similarity of the depth distribution between frames. Experimental results show that the proposed method yields accurate disparity search ranges for several challenging stereo videos and is robust to various forms of noise, scene complexity and camera configurations.
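    The temporal weighting scheme described above can be sketched as follows. This is a simplified illustration assuming histogram intersection as the similarity measure and a simple threshold rule for the range; the actual similarity measure, weights, and thresholds used by the authors may differ:

```python
import numpy as np

def histogram_similarity(h1, h2, eps=1e-10):
    # Histogram intersection of normalized disparity histograms, in [0, 1]
    p = h1 / (h1.sum() + eps)
    q = h2 / (h2.sum() + eps)
    return float(np.minimum(p, q).sum())

def temporally_smoothed_histogram(current, previous):
    # Weighted sum of the current and temporally-neighboring histograms;
    # the weight of each past frame reflects how similar its depth
    # distribution is to the current one, so scene changes automatically
    # reduce the influence of stale frames.
    acc = current.astype(float).copy()
    total = 1.0
    for h in previous:
        w = histogram_similarity(current, h)
        acc += w * h
        total += w
    return acc / total

def search_range(hist, bins, threshold=0.01):
    # Disparity search range: the span of bins whose normalized mass
    # exceeds the threshold
    p = hist / hist.sum()
    idx = np.where(p > threshold)[0]
    return bins[idx[0]], bins[idx[-1]]
```

When consecutive frames have near-identical depth distributions the weights approach 1 and the smoothing reduces histogram noise; when the scene changes abruptly the weights drop and the estimate follows the current frame.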